Background
This is a large-scale cluster deployed with kubespray, using Calico as the network plugin and etcd as the Calico datastore backend.
Changing the cluster-cidr is not a trivial operation, so proceed with caution.
Changing an IP pool
The main steps:
- Install calicoctl as a Kubernetes pod (Source)
- Add a new IP pool (Source).
- Disable the old IP pool. This prevents new IPAM allocations from the old IP pool without affecting the networking of existing workloads.
- Change the nodes' podCIDR parameter (Source)
- Change --cluster-cidr in kube-controller-manager.yaml on the master node. (Credits to OP on that)
- Recreate all existing workloads that were assigned an address from the old IP pool.
- Remove the old IP pool.
Let's get started:
In this example, we will replace 10.234.0.0/15 with 192.232.0.0/14.
- Add a new IP pool:
```bash
calicoctl create -f -<<EOF
```
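The heredoc body holds the new pool's definition; a minimal sketch, assuming the same fields as the new-pool entry that appears in pool.yaml below (add blockSize under spec if the default /26 is not what you want):

```bash
calicoctl create -f -<<EOF
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: new-pool
spec:
  cidr: 192.232.0.0/14
  ipipMode: Always
  natOutgoing: true
EOF
```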
There should now be two enabled IP pools; let's take a look:

```bash
calicoctl get ippool -o wide
```
- Disable the old IP pool.
```
# calicoctl get ippool -o yaml > pool.yaml
# cat pool.yaml
apiVersion: projectcalico.org/v3
items:
- apiVersion: projectcalico.org/v3
  kind: IPPool
  metadata:
    name: default-ipv4-ippool
  spec:
    cidr: 10.234.0.0/15
    ipipMode: Always
    natOutgoing: true
- apiVersion: projectcalico.org/v3
  kind: IPPool
  metadata:
    name: new-pool
  spec:
    cidr: 192.232.0.0/14
    ipipMode: Always
    natOutgoing: true
```
Note: to keep the file readable and to avoid errors when applying it, you can delete fields that are not needed. Also note that the default blockSize is /26.
Edit the file to disable the old pool default-ipv4-ippool by adding disabled: true:
```yaml
apiVersion: projectcalico.org/v3
```
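With the unneeded fields trimmed, the edited pool.yaml could look roughly like this; the only functional change is the added disabled: true on the old pool:

```yaml
apiVersion: projectcalico.org/v3
items:
- apiVersion: projectcalico.org/v3
  kind: IPPool
  metadata:
    name: default-ipv4-ippool
  spec:
    cidr: 10.234.0.0/15
    ipipMode: Always
    natOutgoing: true
    disabled: true
- apiVersion: projectcalico.org/v3
  kind: IPPool
  metadata:
    name: new-pool
  spec:
    cidr: 192.232.0.0/14
    ipipMode: Always
    natOutgoing: true
```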
Apply the change:

```bash
calicoctl apply -f pool.yaml
```
Take another look at the configuration with calicoctl get ippool -o wide:

```
NAME                  CIDR             NAT    IPIPMODE   DISABLED
```
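The old pool should now show as disabled while the new one stays active; illustrative output:

```
NAME                  CIDR             NAT    IPIPMODE   DISABLED
default-ipv4-ippool   10.234.0.0/15    true   Always     true
new-pool              192.232.0.0/14   true   Always     false
```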
- Modify the nodes' podCIDR parameter:
Override the podCIDR parameter on each Kubernetes Node resource with a CIDR from the new range:
If there are only a few nodes, a simple one-off replacement is enough:
```bash
$ kubectl get no kubeadm-0 -o yaml > file.yaml; sed -i "s~10.234.0.0/24~192.232.0.0/24~" file.yaml; kubectl delete no kubeadm-0 && kubectl create -f file.yaml
```
If the cluster is large, close to 1000 nodes in this case, replacing them one by one by hand is clearly not an option, so we can prepare a node.yml template:
```yaml
kind: Node
```
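A minimal sketch of what such a node.yml template could contain, assuming $hostname, $ipcidr and $role are the placeholders to substitute and that node roles are modelled as node-role.kubernetes.io/$role labels:

```yaml
# node.yml - template for re-creating a Node with a new podCIDR.
# $hostname, $ipcidr and $role are substituted per node before applying.
apiVersion: v1
kind: Node
metadata:
  name: $hostname
  labels:
    kubernetes.io/hostname: $hostname
    node-role.kubernetes.io/$role: ""
spec:
  podCIDR: $ipcidr
```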
All that is left is to substitute $hostname/$ipcidr/$role. First, get the mapping between hostname and role from the existing cluster:
```bash
node=`kubectl get node -owide |awk '{print $1}'|grep -v NAME`
```
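That collects the node names; one way to also capture each node's role, a sketch based on the ROLES column of kubectl get node -o wide:

```bash
# Dump "hostname role" pairs for every node in the cluster.
kubectl get node -o wide --no-headers | awk '{print $1, $3}' > node_roles.txt
```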
After exporting the mapping, append the ipcidr planned for each node (plan these out in advance), for example:
```
192.232.1.0
```
Merge the lists into the format $hostname $role $ipcidr, e.g. host_lists:
```
test1 node 192.168.0.0/24
```
Substitute the variables for each node in turn and apply; a sketch of such a loop follows:
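A minimal sketch of that loop, assuming the node.yml template and the host_lists file from the previous steps (one "$hostname $role $ipcidr" entry per line); every node is deleted and re-created with its new podCIDR:

```bash
#!/bin/bash
# Re-create every node with its new podCIDR, driven by host_lists and node.yml.
while read -r hostname role ipcidr; do
  sed -e 's|\$hostname|'"${hostname}"'|g' \
      -e 's|\$role|'"${role}"'|g' \
      -e 's|\$ipcidr|'"${ipcidr}"'|g' node.yml > "node-${hostname}.yaml"
  kubectl delete node "${hostname}"
  kubectl create -f "node-${hostname}.yaml"
done < host_lists
```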
This has to be done for every node in the cluster. Pay attention to the IP ranges: they are different for each node.
- Change the CIDR in the kube-proxy / kubeadm-config ConfigMaps and in kube-controller-manager.yaml
Edit the kubeadm-config ConfigMap and change podSubnet to the new IP range:

```bash
kubectl -n kube-system edit cm kubeadm-config
```
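After the edit, the networking section of the ClusterConfiguration stored in that ConfigMap should look roughly like this (an illustrative excerpt; other fields stay as they are). If kube-proxy runs from a ConfigMap as well, its KubeProxyConfiguration holds the same value as clusterCIDR and needs the same change, e.g. via kubectl -n kube-system edit cm kube-proxy:

```yaml
# kubeadm-config ConfigMap, ClusterConfiguration excerpt
networking:
  podSubnet: 192.232.0.0/14
```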
Then, on the master nodes, change --cluster-cidr in /etc/kubernetes/manifests/kube-controller-manager.yaml:
```bash
$ sudo cat /etc/kubernetes/manifests/kube-controller-manager.yaml
```
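The flag to change in that manifest looks roughly like this (an illustrative excerpt; the other kube-controller-manager flags are omitted). Since this is a static pod manifest, kubelet restarts the controller-manager automatically once the file is saved:

```yaml
# /etc/kubernetes/manifests/kube-controller-manager.yaml (excerpt)
spec:
  containers:
  - command:
    - kube-controller-manager
    - --cluster-cidr=192.232.0.0/14   # previously 10.234.0.0/15
```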
Update the Calico CNI configuration: change ipv4_pools in /etc/cni/net.d/10-calico.conflist and /etc/cni/net.d/calico.conflist.template on every node, and restart all pods afterwards.
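A sketch of the relevant ipam section after the change (field names as in the Calico CNI IPAM plugin configuration; the surrounding conflist fields are omitted):

```json
"ipam": {
    "type": "calico-ipam",
    "ipv4_pools": ["192.232.0.0/14"]
}
```

Restarting the pods can be done by deleting them, for example: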
```bash
kubectl delete pod -n kube-system kube-dns-6f4fd4bdf-8q7zp
```
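Deleting pods one by one does not scale to a cluster of this size; a sketch of a bulk variant, assuming every pod is managed by a controller that will recreate it (the old 10.234.0.0/15 pool covers addresses 10.234.x.x and 10.235.x.x):

```bash
# Delete every pod that still holds an IP from the old 10.234.0.0/15 pool;
# their controllers recreate them with addresses from the new pool.
kubectl get pods --all-namespaces \
  -o jsonpath='{range .items[*]}{.metadata.namespace}{" "}{.metadata.name}{" "}{.status.podIP}{"\n"}{end}' \
  | awk '$3 ~ /^10\.23[45]\./ {print $1, $2}' \
  | while read -r ns pod; do kubectl delete pod -n "$ns" "$pod"; done
```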
Check the workloads with calicoctl get wep --all-namespaces:

```
NAMESPACE   WORKLOAD   NODE   NETWORKS   INTERFACE
```
- Delete the old IP pool:
```bash
calicoctl delete pool default-ipv4-ippool
```
Creating it correctly from scratch
To deploy a cluster under a specific IP range using Kubeadm and Calico, you need to init the cluster with --pod-network-cidr=192.168.0.0/24 (where 192.168.0.0/24 is your desired range), and then you need to tune the Calico manifest before applying it to your fresh cluster.
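For example, the init step might look like this (a sketch; any other kubeadm init flags your environment needs are omitted):

```bash
$ sudo kubeadm init --pod-network-cidr=192.168.0.0/24
```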
To tune Calico before applying it, download its YAML file and change the network range.
- Download the Calico networking manifest for Kubernetes.

```bash
$ curl https://docs.projectcalico.org/manifests/calico.yaml -O
```
- If you are using pod CIDR 192.168.0.0/24, skip to the next step. If you are using a different pod CIDR, use the following commands to set an environment variable called POD_CIDR containing your pod CIDR and replace 192.168.0.0/24 in the manifest with your pod CIDR.
```bash
$ POD_CIDR="<your-pod-cidr>" \
```
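A complete form of those commands might look like this (a sketch; setting the variable in a separate command first makes sure $POD_CIDR is already defined when the sed argument is expanded, and the sed rewrites every occurrence of 192.168.0.0/24 in the downloaded manifest):

```bash
$ POD_CIDR="<your-pod-cidr>"
$ sed -i -e "s?192.168.0.0/24?$POD_CIDR?g" calico.yaml
```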
- Apply the manifest using the following command.
```bash
$ kubectl apply -f calico.yaml
```